7 research outputs found

    Single Camera Face Position-Invariant Driver’s Gaze Zone Classifier Based on Frame-Sequence Recognition Using 3D Convolutional Neural Networks

    No full text
    Estimating the driver’s gaze in a natural real-world setting is challenging: while driving, the face may be occluded, unevenly illuminated, or held at varying positions relative to the camera. In this work, we aim to reduce misclassifications that occur when the driver’s face is at different distances from the camera. Three-dimensional convolutional neural network (3D CNN) models build a spatio-temporal representation of the driver by extracting features encoded across multiple adjacent frames, which can describe motion. This property can compensate for the lack of context information that limits per-frame recognition systems. Drivers commonly check a known set of regions: the front, the navigator, the right window, the left window, the rear-view mirror, and the speedometer. Based on this, we implement and evaluate a model that detects the head direction toward these regions at various distances from the camera. In our evaluation, the 2D CNN model achieved a mean average recall of 74.96% across the three models, whereas the 3D CNN model achieved 87.02%. These results show that our proposed 3D CNN-based approach outperforms a 2D CNN per-frame recognition approach in driving situations where the driver’s face is at different distances from the camera.
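The key idea above is that a 3D convolution aggregates over adjacent frames, so motion cues become visible that a per-frame 2D CNN cannot see. The following minimal sketch (not the authors' model; clip and kernel sizes are illustrative assumptions) shows how a single 3D convolution over a frame stack reduces the temporal dimension while pooling temporal context:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 'valid'-mode 3D convolution over a (T, H, W) frame stack.

    Summing over the temporal axis is what lets a 3D CNN encode
    motion across adjacent frames, unlike per-frame 2D filtering.
    """
    kt, kh, kw = kernel.shape
    T, H, W = clip.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(clip[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

# Toy clip: 8 frames of 16x16 pixels; a 3x3x3 kernel spans 3 adjacent frames.
clip = np.random.rand(8, 16, 16)
feat = conv3d_valid(clip, np.ones((3, 3, 3)) / 27.0)
print(feat.shape)  # (6, 14, 14)
```

A real gaze-zone classifier would stack several such layers (with learned kernels) and end in a softmax over the six gaze regions; this sketch only illustrates the spatio-temporal aggregation the abstract relies on.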

    Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning

    No full text
    To avoid inefficient movement or the freezing problem in crowded environments, we previously proposed a human-aware interactive navigation method that uses inducement, i.e., voice reminders or physical touch. However, the effectiveness of inducement depends on many factors, including human attributes, task contents, and environmental contexts, so it is unrealistic to pre-design parameters such as the cost-function coefficients, personal space, and velocity for every situation. To understand when and how inducement (a voice reminder in this study) is effective, we propose to learn it through multiagent deep reinforcement learning, in which the robot voluntarily acquires an inducing policy suited to the situation. Specifically, we evaluate whether learning when to use a voice reminder can reduce the time to reach the goal. Results of simulation experiments in four different situations show that the robot learned inducing policies suited to each situation, and that the benefit of inducement grows in more congested and narrow environments.
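The core of "learning when to use inducement" can be illustrated with a much simpler stand-in than the deep multiagent setup in the abstract: a tabular one-step value update on a toy problem where a voice reminder only pays off in dense crowds. All states, actions, and reward numbers below are hypothetical, chosen only to make the learned policy interpretable:

```python
import random

# Toy problem (hypothetical numbers): state = crowd density (0 sparse, 1 dense);
# action = 0 (stay silent) or 1 (issue a voice reminder).
# A reminder clears the path in dense crowds but is superfluous when sparse.
def step_reward(state, action):
    if state == 1:                        # dense crowd: inducement helps
        return 1.0 if action == 1 else -1.0
    return 0.5 if action == 0 else 0.0    # sparse: silence is slightly better

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]          # Q[state][action]
    for _ in range(episodes):
        s = random.randint(0, 1)
        if random.random() < eps:         # epsilon-greedy exploration
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        q[s][a] += alpha * (step_reward(s, a) - q[s][a])  # one-step update
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in (0, 1)]
print(policy)  # [0, 1]: remind only when the crowd is dense
```

The paper's setting is far richer (deep networks, multiple agents, continuous navigation), but the learned structure is the same: the value of the inducing action is situation-dependent, and the policy uses it only where it shortens the time to goal.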

    Proposal and Preliminary Feasibility Study of a Novel Toroidal Magnetorheological Piston

    No full text

    A Coordinated Wheeled Gas Pipeline Robot Chain System Based on Visible Light Relay Communication and Illuminance Assessment

    No full text
    Gas pipelines require regular inspection, because leakage disrupts the stable gas supply. Compared with current detection methods such as destructive inspection, pipeline robots offer low cost and high efficiency. However, their inspection range in complex pipes is limited by cable friction or wireless signal attenuation. In our former study, to extend the inspection range, we proposed a robot chain system based on wireless relay communication (WRC). However, drawbacks remained, such as imprecise evaluation based on the received signal strength indication (RSSI), a high data error ratio, and loss of signals. In this article, we therefore propose a new approach based on visible light relay communication (VLRC) and illuminance assessment. This method lets robots communicate through a ‘light signal relay’, which offers good communication quality, low attenuation, and high precision inside the pipe. To ensure the stability of VLRC, an illuminance-based evaluation method is adopted, as it is more stable than the wireless-based approach. As a preliminary evaluation, tests of the signal waveform, communication quality, and coordinated movement were conducted. The results indicate that the proposed system can extend the inspection range with a lower data error ratio and more stable communication.
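One way to picture the illuminance-based assessment is with an idealized point-source model: a sketch, not the paper's method, assuming an inverse-square law (E = I / d²) and a minimum illuminance threshold that each relay robot must maintain toward its neighbor. The intensity and threshold values are invented for illustration:

```python
# Hypothetical illuminance link model: a point LED source of luminous
# intensity I (candela) gives illuminance E = I / d**2 (lux) at distance
# d (metres) on-axis. A relay robot keeps the optical link alive by
# staying where the received illuminance meets a threshold E_min.
def illuminance(intensity_cd, distance_m):
    return intensity_cd / distance_m ** 2

def max_relay_spacing(intensity_cd, e_min_lx):
    """Largest robot-to-robot spacing keeping illuminance above E_min."""
    return (intensity_cd / e_min_lx) ** 0.5

I_CD, E_MIN = 100.0, 4.0          # assumed values, not from the paper
d_max = max_relay_spacing(I_CD, E_MIN)
print(d_max)                       # 5.0 metres between chain members
assert illuminance(I_CD, d_max) >= E_MIN
```

Unlike RSSI, which fluctuates with multipath reflections inside a metal pipe, illuminance under this model is a smooth monotone function of distance, which is the intuition behind using it as the stability criterion for the robot chain.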

    Compound locomotion control system combining crawling and walking for multi-crawler multi-arm robot to adapt unstructured and unknown terrain

    No full text
    Abstract Improving task performance, and controlling a robot in extreme environments where only a few sensors can be used to obtain environmental information, are two key problems for disaster response robots (DRRs). Compared with conventional DRRs, multi-arm multi-flipper crawler robots (MAMFRs) offer high mobility and task-execution capability. Crawler robots and quadruped robots have complementary advantages in locomotion, and our vision is to combine both advantages in a MAMFR. MAMFRs (such as the four-arm four-flipper robot OCTOPUS) are typically designed to work in extreme environments such as those filled with heavy smoke and fog, so a DRR must be able to operate even when vision and laser sensors are unavailable. To maximize terrain adaptability, self-balancing capability, and obstacle-crossing capability on unstructured disaster sites, while reducing the difficulty of robot control, we propose a semi-autonomous control system that realizes a compound locomotion method for MAMFRs. In this control strategy, the robot explores the terrain and obtains basic information about its surroundings through its structure and internal sensors, such as encoders and an inertial measurement unit. The control system can also recognize the relative position of the robot with respect to the surrounding environment from the state of its arms and crawlers while moving. Because the control rules are simple yet effective, and each part adjusts its own state automatically according to the robot state and the explored terrain, MAMFRs achieve better terrain adaptability and stability. Experimental results with a virtual reality simulator indicate that the designed control system significantly improves the stability and mobility of the robot during tasks, and that the robot can adapt to complex terrain under the designed control system.
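The proprioception-only selection between crawling and arm-assisted walking described above can be sketched as a small decision rule driven by the IMU attitude and contact state. The thresholds, mode names, and contact flag below are illustrative assumptions, not the paper's controller:

```python
# Sketch of a proprioception-only locomotion-mode selector (assumed
# thresholds, not the authors' rules): choose between crawling and
# arm-assisted walking from IMU attitude and flipper contact alone,
# since vision and laser sensors may be blinded by smoke or fog.
def select_locomotion(roll_deg, pitch_deg, flipper_contact):
    tilt = max(abs(roll_deg), abs(pitch_deg))
    if tilt < 10.0:
        return "crawl"                 # near-flat ground: tracks are fastest
    if tilt < 30.0 and flipper_contact:
        return "crawl_with_flippers"   # moderate slope: reshape the flippers
    return "walk"                      # rough terrain: use the arms as legs

print(select_locomotion(2.0, 3.0, True))     # crawl
print(select_locomotion(5.0, 20.0, True))    # crawl_with_flippers
print(select_locomotion(15.0, 40.0, False))  # walk
```

In the paper's semi-autonomous scheme each arm and flipper additionally adjusts its own state from encoder feedback; this sketch only shows the top-level mode switch that such internal-sensor rules make possible.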